29 research outputs found

    Optimal association of mobile users to multi-access edge computing resources

    Multi-access edge computing (MEC) plays a key role in fifth-generation (5G) networks by bringing cloud functionalities to the edge of the radio access network, in close proximity to mobile users. In this paper we focus on mobile-edge computation offloading, a way to transfer computationally demanding, latency-critical applications from mobile handsets to nearby MEC servers, in order to reduce latency and/or energy consumption. Our goal is to provide an optimal strategy for associating mobile users with access points (APs) and MEC hosts, while contextually optimizing the allocation of radio and computational resources to each user, with the objective of minimizing the overall user transmit power under latency constraints that incorporate both communication and computation times. The overall problem is a mixed-binary problem. To overcome its inherent computational complexity, we propose two alternative strategies: i) a method based on successive convex approximation (SCA) techniques, proven to converge to local optimal solutions; ii) an approach hinging on matching theory, based on formulating the assignment problem as a matching game.
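    The matching-theory strategy above can be illustrated with a toy capacity-constrained deferred-acceptance routine. This is a generic sketch, not the paper's actual game: user/AP names, preference lists, and scores are invented. Users propose to APs in order of preference; each AP tentatively keeps its best proposers up to capacity and rejects the rest, who propose again.

```python
# Toy deferred-acceptance matching of users to access points (hypothetical data).
def match_users_to_aps(prefs, ap_rank, capacity):
    """prefs[u]   : APs ordered by user u's preference (e.g., channel gain).
       ap_rank[a] : dict user -> score (higher = preferred by AP a).
       capacity[a]: max number of users AP a can serve."""
    next_choice = {u: 0 for u in prefs}   # next AP index each user proposes to
    accepted = {a: [] for a in ap_rank}   # tentative matches per AP
    free = list(prefs)                    # users currently unmatched
    while free:
        u = free.pop()
        if next_choice[u] >= len(prefs[u]):
            continue                      # u exhausted its list: stays unmatched
        a = prefs[u][next_choice[u]]
        next_choice[u] += 1
        accepted[a].append(u)
        # AP keeps only its best `capacity[a]` proposers, rejects the rest.
        accepted[a].sort(key=lambda x: ap_rank[a][x], reverse=True)
        while len(accepted[a]) > capacity[a]:
            free.append(accepted[a].pop())  # rejected user proposes again later
    return accepted

# Two APs with capacity 1; three users, so one remains unmatched.
prefs = {"u1": ["a1", "a2"], "u2": ["a1", "a2"], "u3": ["a2", "a1"]}
ap_rank = {"a1": {"u1": 0.9, "u2": 0.5, "u3": 0.2},
           "a2": {"u1": 0.4, "u2": 0.8, "u3": 0.6}}
matching = match_users_to_aps(prefs, ap_rank, {"a1": 1, "a2": 1})
```

Deferred acceptance terminates because each user proposes to each AP at most once, which is what makes such matching games attractive as low-complexity alternatives to the mixed-binary problem.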

    The edge cloud: A holistic view of communication, computation and caching

    The evolution of communication networks shows a clear shift of focus from merely improving the communication aspects to enabling important new services, from Industry 4.0 to automated driving, virtual/augmented reality, the Internet of Things (IoT), and so on. This trend is evident in the roadmap planned for the deployment of fifth-generation (5G) communication networks. This ambitious goal requires a paradigm shift towards a vision that looks at communication, computation and caching (3C) resources as three components of a single holistic system. The further step is to bring these 3C resources closer to the mobile user, at the edge of the network, to enable very low latency and high reliability services. The scope of this chapter is to show that signal processing techniques can play a key role in this new vision. In particular, we motivate the joint optimization of 3C resources. Then we show how graph-based representations can play a key role in building effective learning methods and devising innovative resource allocation techniques. Comment: to appear in the book "Cooperative and Graph Signal Processing: Principles and Applications", P. Djuric and C. Richard Eds., Academic Press, Elsevier, 201

    Dynamic edge computing empowered by reconfigurable intelligent surfaces

    In this paper, we propose a novel algorithm for energy-efficient low-latency dynamic mobile edge computing (MEC), in the context of beyond-5G networks endowed with reconfigurable intelligent surfaces (RISs). We consider a scenario where new computing requests are continuously generated by a set of devices and are handled through a dynamic queueing system. Building on stochastic optimization tools, we devise a dynamic learning algorithm that jointly optimizes the allocation of radio resources (i.e., power, transmission rates, sleep mode and duty cycle), computation resources (i.e., CPU cycles), and RIS reflectivity parameters (i.e., phase shifts), while guaranteeing a target performance in terms of average end-to-end delay. The proposed strategy enables dynamic control of the system, performing a low-complexity optimization on a per-slot basis while dealing with time-varying radio channels and task arrivals, whose statistics are unknown. The presence and optimization of RISs help boost the performance of dynamic MEC, thanks to their capability to shape and adapt the wireless propagation environment. Numerical results assess the performance, in terms of service delay, learning, and adaptation capabilities, of the proposed strategy for RIS-empowered MEC.
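    The RIS phase-shift subproblem can be sketched in isolation, as a hedged illustration only: the snippet below assumes a static, single-user, narrowband model (not the paper's dynamic queueing formulation), with made-up channel values. Co-phasing each reflected path with the direct link maximizes the composite channel gain.

```python
# Toy RIS phase alignment: composite channel is h_d + sum_i c_i * exp(j*theta_i),
# where c_i is the cascaded channel through RIS element i (hypothetical model).
import cmath

def align_phases(h_d, cascaded):
    """Phase shifts that co-phase every RIS path with the direct link."""
    return [cmath.phase(h_d) - cmath.phase(c) for c in cascaded]

def composite_gain(h_d, cascaded, thetas):
    """Magnitude of the composite channel for given phase shifts."""
    return abs(h_d + sum(c * cmath.exp(1j * t) for c, t in zip(cascaded, thetas)))

# Invented channels: with aligned phases the gain equals |h_d| + sum |c_i|.
h_d = 1.0 + 1.0j
cascaded = [0.5 - 0.2j, -0.3 + 0.4j]
best = composite_gain(h_d, cascaded, align_phases(h_d, cascaded))
```

With aligned phases all paths add coherently, so the gain hits its upper bound; any other choice of phase shifts can only be smaller.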

    Wireless Edge Machine Learning: Resource Allocation and Trade-Offs

    The aim of this paper is to propose a resource allocation strategy for dynamic training and inference of machine learning tasks at the edge of the wireless network, with the goal of exploring the trade-off between energy, delay and learning accuracy. The scenario of interest is composed of a set of devices sending a continuous flow of data to an edge server that extracts relevant information by running online learning algorithms, within the emerging framework known as Edge Machine Learning (EML). Taking into account the limitations of edge servers, with respect to a cloud, and the scarcity of resources of mobile devices, we focus on the efficient allocation of radio (e.g., data rate, quantization) and computation (e.g., CPU scheduling) resources, to strike the best trade-off between energy consumption and quality of the EML service, including service end-to-end (E2E) delay and accuracy of the learning task. To this end, we propose two different dynamic strategies: (i) the first method aims to minimize the system energy consumption, under constraints on E2E service delay and accuracy; (ii) the second method aims to optimize the learning accuracy, while guaranteeing an E2E delay and a bounded average energy consumption. Then, we present a dynamic resource allocation framework for EML based on stochastic Lyapunov optimization. Our low-complexity algorithms do not require any prior knowledge of the statistics of wireless channels, data arrivals, and data probability distributions. Furthermore, our strategies can incorporate prior knowledge regarding the model underlying the observed data, or can work in a totally data-driven fashion. Several numerical results on synthetic and real data assess the performance of the proposed approach.
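    The stochastic Lyapunov machinery behind such per-slot strategies can be sketched for a single transmit queue. The toy drift-plus-penalty step below is not the paper's algorithm: the discrete rate set, the quadratic power function, and the trade-off parameter V are all invented. Each slot, the rate minimizing V*power(r) - Q*r is chosen, trading energy (penalty) against queue backlog (drift).

```python
# Toy drift-plus-penalty control of one queue (hypothetical rate/power model).
def drift_plus_penalty_rate(Q, V, rates, power):
    """Pick the rate minimizing V*power(r) - Q*r over a discrete rate set."""
    return min(rates, key=lambda r: V * power(r) - Q * r)

def run(arrivals, V, rates, power):
    """Simulate the queue: serve r(t) units per slot, keep backlog nonnegative."""
    Q, served = 0.0, []
    for a in arrivals:
        r = drift_plus_penalty_rate(Q, V, rates, power)
        served.append(r)
        Q = max(Q + a - r, 0.0)  # queue update
    return Q, served
```

Larger V favors low transmit power (and energy) at the cost of longer queues, hence delay: exactly the energy/delay trade-off the dynamic strategies explore.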

    Goal-oriented Communications for the IoT: System Design and Adaptive Resource Optimization

    Internet of Things (IoT) applications combine sensing, wireless communication, intelligence, and actuation, enabling the interaction among heterogeneous devices that collect and process considerable amounts of data. However, the effectiveness of IoT applications must cope with the limited availability of resources, including spectrum, energy, computing, learning and inference capabilities. This paper challenges the prevailing approach to IoT communication, which prioritizes the usage of resources in order to guarantee perfect recovery, at the bit level, of the data transmitted by the sensors to the central unit. We propose a novel approach, called goal-oriented (GO) IoT system design, that transcends traditional bit-related metrics and focuses directly on the fulfillment of the goal motivating the exchange of data. The improvement is then achieved through a comprehensive system optimization, integrating sensing, communication, computation, learning, and control. We provide numerical results demonstrating the practical applications of our methodology in compelling use cases such as edge inference, cooperative sensing, and federated learning. These examples highlight the effectiveness and real-world implications of our proposed approach, with the potential to revolutionize IoT systems. Comment: Accepted for publication in IEEE Internet of Things Magazine, special issue on "Task-Oriented Communications and Networking for the Internet of Things".
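    Of the cited use cases, federated learning is the easiest to ground with a sketch. The fragment below is a textbook FedAvg-style weighted model-averaging step, shown purely for illustration; the client weight vectors and dataset sizes are invented and say nothing about the paper's actual experiments.

```python
# Toy FedAvg aggregation: average client models weighted by local dataset size.
def fedavg(client_weights, client_sizes):
    """client_weights: list of weight vectors, one per client.
       client_sizes  : number of local samples at each client."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
            for i in range(dim)]

# Two hypothetical clients; the second holds 3x more data, so it dominates.
global_model = fedavg([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```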

    Overbooking radio and computation resources in mmW-mobile edge computing to reduce vulnerability to channel intermittency

    One of the key features of the 5G roadmap is mobile edge computing (MEC), as an effective way to bring information technology (IT) services close to the mobile user. Moving computation and caching resources to the edge of the access network enables low latency and high reliability services, as required in many of the verticals associated with 5G, such as Industry 4.0 or automated driving. Merging MEC with millimeter wave (mmW) communications provides a further thrust to enable low latency and high reliability services, thanks to the high data rate of mmW links and the ability to handle interference through massive beamforming. However, mmW links are prone to blocking events, which could limit the effectiveness of the mmW-MEC deployment. To robustify mmW-MEC against blocking, in this paper we propose and analyze the performance of two strategies: i) overbooking of computation and communication resources, based on the statistics of blocking events; and ii) adopting multi-link communications.
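    Strategy i) can be sketched with a toy admission rule. Assuming, purely for illustration, that each mmW link is blocked independently with probability p_block (the paper's blocking statistics are surely richer), the server can admit more users than its nominal capacity C, as long as the probability that the simultaneously active users exceed C stays below a tolerance eps.

```python
# Toy overbooking rule driven by blocking statistics (i.i.d. blocking model).
from math import comb

def p_overload(N, C, p_block):
    """P(more than C of N admitted users are unblocked at once), binomial model."""
    q = 1.0 - p_block  # probability a given link is active (not blocked)
    return sum(comb(N, k) * q**k * (1 - q)**(N - k) for k in range(C + 1, N + 1))

def max_admissible(C, p_block, eps):
    """Largest N with overload probability at most eps (N >= C always safe)."""
    N = C
    while p_overload(N + 1, C, p_block) <= eps:
        N += 1
    return N
```

For example, with capacity C = 10, blocking probability 0.3, and a 5% overload tolerance, this model admits 11 users: a modest overbooking gain that grows with the blocking probability.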

    Dynamic joint resource allocation and user assignment in multi-access edge computing

    Multi-Access Edge Computing (MEC) is one of the key technology enablers of the 5G ecosystem, in combination with the high-speed access provided by mmWave communications. In this paper, among all services enabled by MEC, we focus on computation offloading, devising an algorithm to optimize computation and communication resources jointly with the assignment of mobile users to Access Points and Mobile Edge Hosts, in a dynamic scenario where computation tasks are continuously generated according to (unknown) random arrival processes at each user. To formulate and solve the dynamic allocation/assignment problem, we merge tools from stochastic optimization and matching theory, thus developing a low-complexity algorithmic solution that works in an online fashion. Numerical results illustrate the potential advantages of the proposed approach.

    Dynamic Resource Allocation for Wireless Edge Machine Learning with Latency And Accuracy Guarantees

    In this paper, we address the problem of dynamic allocation of communication and computation resources for Edge Machine Learning (EML) exploiting Multi-Access Edge Computing (MEC). In particular, we consider an IoT scenario, where sensor devices collect data from the environment and upload them to an edge server that runs a learning algorithm based on Stochastic Gradient Descent (SGD). The aim is to explore the optimal trade-off between the overall system energy consumption, including IoT devices and edge server, the overall service latency, and the learning accuracy. Building on stochastic optimization tools, we devise an algorithm that jointly allocates radio and computation resources in a dynamic fashion, without requiring prior knowledge of the statistics of the channels, task arrivals, and input data. Finally, we test our algorithm in the specific case in which the edge server runs a Least Mean Squares (LMS) algorithm on the data acquired by each sensor device.
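    The LMS recursion mentioned at the end is standard and easy to sketch; the data stream, true weights, and step size below are synthetic, chosen only to show the update w <- w + mu * e * x with a priori error e = d - w.x.

```python
# Textbook LMS filter over a stream of (input, desired-output) pairs.
def lms(samples, mu, dim):
    """samples: iterable of (x, d) pairs; mu: step size; dim: weight length."""
    w = [0.0] * dim
    for x, d in samples:
        e = d - sum(wi * xi for wi, xi in zip(w, x))    # a priori error
        w = [wi + mu * e * xi for wi, xi in zip(w, x)]  # stochastic gradient step
    return w

# Synthetic stream generated by the (invented) true weights [2, -1].
samples = [([1.0, 0.0], 2.0), ([0.0, 1.0], -1.0)] * 20
w_hat = lms(samples, 0.5, 2)
```

Each coordinate's error shrinks geometrically here (factor 1 - mu per update on that coordinate), so after 20 passes the estimate is essentially converged.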